Edge detection algorithm


Research on Edge Detection of LiDAR Images Based on Artificial Intelligence Technology

Yang, Haowei, Wang, Liyang, Zhang, Jingyu, Cheng, Yu, Xiang, Ao

arXiv.org Artificial Intelligence

LiDAR works by emitting laser pulses and measuring their reflection times to accurately obtain three-dimensional spatial information, thus generating high-resolution point cloud data and images. However, the application of LiDAR images faces numerous challenges, particularly in edge detection, where traditional methods often fail to meet practical needs due to insufficient detection accuracy and high computational complexity. Edge detection, as a crucial step in image processing, directly impacts subsequent tasks such as image segmentation, object recognition, and scene understanding [1]. Accurate edge detection can improve target recognition accuracy, optimize navigation path planning, and enhance the reliability of environmental perception. Therefore, studying an efficient and accurate LiDAR image edge detection method has significant theoretical value and application prospects. Existing edge detection methods, such as the Canny and Sobel algorithms, perform well on conventional images but often struggle with the unique noise characteristics and data structure of LiDAR images. With the rapid advancement of artificial intelligence technology, deep learning has achieved remarkable results in image processing. However, applying deep learning to LiDAR image edge detection still faces challenges such as complex data preprocessing, difficult model training, and significant computational resource demands. Hence, there is an urgent need for an innovative AI-based edge detection method to address these challenges. This study aims to explore and develop an AI-based edge detection method for LiDAR images. The main research contents include: 1. Reviewing the current state of LiDAR technology and its application in edge detection.
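The classical operators the abstract contrasts with learning-based approaches can be sketched in a few lines. Below is a minimal pure-Python Sobel gradient-magnitude pass; the tiny step-edge image is an illustrative stand-in, not LiDAR data, and real LiDAR intensity or range images would need the noise handling the paper discusses.

```python
# Minimal Sobel gradient-magnitude sketch (pure Python, no libraries).
# Illustrative only: real LiDAR images are noisy, which is exactly where
# the abstract says such classical operators struggle.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal-gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical-gradient kernel

def sobel_magnitude(img):
    """Return |Gx| + |Gy| for interior pixels (border left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge between columns 2 and 3:
step = [[0, 0, 0, 1, 1] for _ in range(4)]
mag = sobel_magnitude(step)
# The response concentrates on the two interior columns adjacent to the step.
```

On this clean synthetic step the operator localises the edge exactly; the paper's point is that noise in LiDAR data breaks this tidy behaviour.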


The Effectiveness of Edge Detection Evaluation Metrics for Automated Coastline Detection

O'Sullivan, Conor, Coveney, Seamus, Monteys, Xavier, Dev, Soumyabrata

arXiv.org Artificial Intelligence

We analyse the effectiveness of RMSE, PSNR, SSIM and FOM for evaluating edge detection algorithms used for automated coastline detection. Typically, the accuracy of detected coastlines is assessed visually. This can be impractical on a large scale, leading to the need for objective evaluation metrics. Hence, we conduct an experiment to find reliable metrics. We apply Canny edge detection to 95 coastline satellite images across 49 testing locations. We vary the hysteresis thresholds and compare metric values to a visual analysis of detected edges. We found that FOM was the most reliable metric for selecting the best threshold. It could select a better threshold 92.6% of the time and the best threshold 66.3% of the time. This compares to RMSE, PSNR and SSIM, which could select the best threshold 6.3%, 6.3% and 11.6% of the time, respectively. We provide a reason for these results by reformulating RMSE, PSNR and SSIM in terms of confusion matrix measures. This suggests that these metrics not only fail in this experiment but are not useful for evaluating edge detection in general.
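The FOM the study found most reliable is Pratt's Figure of Merit, which scores each detected edge pixel by its distance to the nearest reference edge pixel. The abstract does not spell out the formula, so the brute-force sketch below is an assumed illustration using the conventional scaling constant alpha = 1/9 and toy coordinate sets, not the paper's satellite data.

```python
# Brute-force sketch of Pratt's Figure of Merit (FOM). Edge maps are
# given as sets of (row, col) pixel coordinates; alpha = 1/9 is the
# conventional scaling constant. Toy data, not the paper's imagery.

def pratt_fom(ideal, detected, alpha=1.0 / 9.0):
    """FOM = (1 / max(N_I, N_D)) * sum over detected pixels of
    1 / (1 + alpha * d^2), with d the distance to the nearest ideal pixel."""
    if not ideal or not detected:
        return 0.0
    total = 0.0
    for (r, c) in detected:
        d2 = min((r - ri) ** 2 + (c - ci) ** 2 for (ri, ci) in ideal)
        total += 1.0 / (1.0 + alpha * d2)
    return total / max(len(ideal), len(detected))

coast = {(0, 2), (1, 2), (2, 2), (3, 2)}    # reference coastline pixels
shifted = {(0, 3), (1, 3), (2, 3), (3, 3)}  # detection off by one pixel

print(pratt_fom(coast, coast))    # perfect detection gives 1.0
print(pratt_fom(coast, shifted))  # one-pixel shift is mildly penalised
```

Because every term stays positive and decays smoothly with distance, FOM degrades gracefully for near-miss detections, which is one plausible reason it tracks visual judgement better than pixel-exact measures like RMSE.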


Automated Coastline Extraction Using Edge Detection Algorithms

O'Sullivan, Conor, Coveney, Seamus, Monteys, Xavier, Dev, Soumyabrata

arXiv.org Artificial Intelligence

We analyse the effectiveness of edge detection algorithms for the purpose of automatically extracting coastlines from satellite images. Four algorithms (Canny, Sobel, Scharr and Prewitt) are compared visually and using metrics. With an average SSIM of 0.8, Canny detected edges that were closest to the reference edges. However, the algorithm had difficulty distinguishing noisy edges, e.g. due to development, from coastline edges. In addition, histogram equalization and Gaussian blur were shown to improve the effectiveness of the edge detection algorithms by up to 1.5 and 1.6 times respectively.
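One of the preprocessing steps the study credits with improving edge detection is histogram equalization, which stretches a low-contrast intensity range across the full 8-bit scale. The pure-Python sketch below uses the standard CDF-based remapping on a flat list of grey values; the tiny input is illustrative, not satellite imagery.

```python
# Pure-Python histogram equalization sketch: remap 8-bit grey values so
# their cumulative histogram is approximately linear, stretching contrast
# before edge detection. Toy input, not the study's satellite data.

def equalize(pixels, levels=256):
    """Apply the standard CDF-based histogram equalization mapping."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [0] * levels, 0
    for v in range(levels):
        running += hist[v]
        cdf[v] = running
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to spread
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A low-contrast strip clustered in [100, 103] is stretched to [0, 255]:
flat = [100, 100, 101, 101, 102, 102, 103, 103]
print(equalize(flat))  # [0, 0, 85, 85, 170, 170, 255, 255]
```

After equalization the small intensity steps between neighbouring values become large ones, which is why gradient-based detectors such as Canny respond more strongly to genuine boundaries.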


Finding Whales in Ocean Water: Edge Detection, Blob Processing, and RGB Channels in C#

@machinelearnbot

A Visual Studio 2013 demo project including all of the code in this article can be downloaded using the links in the resources section below. I was recently tasked with trying to isolate whales within an image of ocean water using C#. While building machine learning models, the question was raised: "Does ocean water add noise in a model to detect a specific whale while it is swimming in the ocean?" This was the primary question posed. However, before it could be studied, a method was needed to isolate various whales within many images of ocean water.
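The article's blob-processing step, grouping foreground pixels into candidate whale regions after edge detection and channel thresholding, amounts to connected-component labelling. The C# project itself is not reproduced here, so the pure-Python 4-connectivity flood fill below is an assumed stand-in for that idea on a toy binary mask.

```python
# Sketch of the blob-processing idea: group foreground pixels of a binary
# mask into 4-connected components. A stand-in for the article's C# code
# (not reproduced here); each blob would be a candidate whale region.

def count_blobs(mask):
    """Count 4-connected components of 1-pixels in a 2D 0/1 grid."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                blobs += 1
                stack = [(y, x)]  # iterative flood fill, no recursion limit
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and mask[cy][cx] and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack.extend([(cy + 1, cx), (cy - 1, cx),
                                      (cy, cx + 1), (cy, cx - 1)])
    return blobs

water = [[0, 1, 1, 0, 0],
         [0, 1, 0, 0, 1],
         [0, 0, 0, 1, 1]]
print(count_blobs(water))  # 2: two separate candidate blobs
```

In the whale-isolation setting, each component's bounding box could then be cropped out, separating the animals from the surrounding ocean water before any model sees the image.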


Notes on a New Philosophy of Empirical Science

Burfoot, Daniel

arXiv.org Machine Learning

This book presents a methodology and philosophy of empirical science based on large scale lossless data compression. In this view a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation.

Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation.

The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 GB, it is easy to justify a 10 MB model). The complex models and abstractions learned on the basis of the raw data (images, language, etc) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
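The book's scoring rule, compressed size plus the length of the compressor itself, can be illustrated with the standard-library zlib. The toy "databases" below and the fixed byte count standing in for compressor length are assumptions for illustration, not the book's benchmarks.

```python
import hashlib
import zlib

# Toy illustration of the book's scoring rule: a theory is valued by how
# small it makes a benchmark database, plus the length of the compressor
# itself. zlib stands in for a learned compressor; the two "databases"
# and the fixed compressor-length figure are illustrative assumptions.

def codelength(data: bytes, compressor_len: int) -> int:
    """Total description length: compressed data + compressor's own size."""
    return len(zlib.compress(data)) + compressor_len

# A highly regular "database" full of discoverable structure:
regular = b"the quick brown fox " * 500

# A structureless stream of comparable length (chained SHA-256 digests,
# deterministic but incompressible for practical purposes):
chunks, seed = [], b"seed"
for _ in range(320):
    seed = hashlib.sha256(seed).digest()
    chunks.append(seed)
irregular = b"".join(chunks)

# The same compressor earns a far shorter codelength on data with
# empirical regularities, mirroring the Occam principle in the text.
print(codelength(regular, 100) < codelength(irregular, 100))  # True
```

Adding the compressor's own length to the score is what penalises overfitting: a "theory" that merely memorises the database makes the compressor as large as the data it saves.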